Monthly Archives: January 2015

How to type Chinese in the terminal under FreeBSD

csh:
setenv LANG en_US.UTF-8

bash:
To type Chinese in terminals such as SecureCRT:
LANG=zh;export LANG

Add the following to $HOME/.profile or /etc/profile:
stty cs8 -istrip

At this point, if your shell is sh you can already type Chinese; if your shell is bash, you also need the following:

In $HOME/.inputrc:
set meta-flag on
set output-meta on
set convert-meta off

大奖章 quant API for Python: simulated trading tested successfully

Python 3.4, 大奖章 quant API 1.10

First, here is what a successful session looks like:

Python 3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, 13:04:39) on Windows (32 bits).
This is the IEP interpreter with integrated event loop for PYSIDE.

Using IPython 2.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

 

In [3]:  from WindPy import *

In [4]: import datetime

In [5]: w.start()
Welcome to use Wind Quant API 1.0 for Python (WindPy)!
You can use w.menu to help yourself to create commands(WSD,WSS,WST,WSI,WSQ,…)!

COPYRIGHT (C) 2013 WIND HONGHUI INFORMATION & TECHNOLOGY CO., LTD. ALL RIGHTS RESERVED.
IN NO CIRCUMSTANCE SHALL WIND BE RESPONSIBLE FOR ANY DAMAGES OR LOSSES CAUSED BY USING WIND QUANT API 1.0 FOR Python.
Out[5]:
.ErrorCode=0
.Data=[['OK!']]

In [6]: dct1=w.tlogon("00000010","0","M:15853799XXX01","123456","SHSZ")

 

In [9]: dct1.Data
Out[9]: [[2], ['M:15853799XXX01'], ['SZSHA'], [0], ['']]
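
A quick way to fail fast on login problems is to check the returned structure before trading. A minimal sketch, assuming the field order inferred from the output above (index 0 = LogonID, index 3 = per-account error code, index 4 = error message); the helper name is mine, not WindPy's:

def check_logon(dct):
    # dct is the object returned by w.tlogon()
    logon_id = dct.Data[0][0]
    err_code = dct.Data[3][0]
    err_msg  = dct.Data[4][0]
    if dct.ErrorCode != 0 or err_code != 0:
        print("logon failed: %s %s" % (err_code, err_msg))
        return None
    print("logon OK, LogonID = %s" % logon_id)
    return logon_id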

 

In [11]: w.torder("600030.SH","Buy","0","200","OrderType=B5TC;LogonID=2")
Out[11]:
.ErrorCode=0
.Fields=['RequestID',…]
.Data=[[5],['600030.SH',…]

In [12]: dct=w.torder("600030.SH","Buy","0","200","OrderType=B5TC;LogonID=2")

 

In [14]: dct.Data
Out[14]:
[[6],
['600030.SH'],
['1'],
['0'],
['200'],
['B5TC'],
['2'],
[0],
['Sending …']]

In [15]:
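
As with login, it helps to pull the request ID out of the reply so the order can be tracked later. A minimal sketch, again assuming the field order seen in Out[14] (index 0 = RequestID, index 7 = error code, index 8 = status message); the helper name is hypothetical:

def check_order(dct):
    # dct is the object returned by w.torder()
    if dct.ErrorCode != 0:
        print("order rejected, ErrorCode = %s" % dct.ErrorCode)
        return None
    request_id = dct.Data[0][0]
    status = dct.Data[8][0]
    print("RequestID %s: %s" % (request_id, status))
    return request_id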

Then I checked the simulated account, and 400 shares of CITIC Securities were quietly sitting there:

Code    Name              Qty  Sellable  Cost     Last     Mkt value  Floating P/L  P/L(%)  Shareholder code
600030  CITIC Securities  400  0         35.9882  35.2500  14100      -295          -2.05%  A000002215

Now back to the painful part of the journey. As the saying goes, good things take time; when I first started testing, I got an error:

In [16]: dct1=w.tlogon("00000010","0","M:15853799xxx01","123456","SHSZ")

 

In [17]: dct1.Data
Out[17]:
[[0],
['M:15853799XXX01'],
['SZSHA'],
[-40530102],
['200登录失败:资金账户M:15853799XXX01密码错误!']]

An error like this (roughly: "200 login failed: wrong password for funds account M:15853799XXX01") is really frustrating, because it gives no clue where the problem actually is! I later asked 大奖章 for help, and 小疯猪 mentioned the Wind password issue: I had always used username + password when calling w.start(), but 小疯猪 was talking about mobile number + password, and at that point things started to make sense. After resetting the mobile-number password and logging in with the mobile number for w.start(), the password error in simulated trading was solved.

Thanks to 大奖章, and I hope the experience above helps others!

 

First steps with Scrapy

http://scrapy-chs.readthedocs.org/zh_CN/0.24/intro/tutorial.html

 

 

D:\work>scrapy startproject tutorial
C:\Python27\lib\site-packages\twisted\internet\_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
verifyHostname, VerificationError = _selectVerifyImplementation()
New Scrapy project 'tutorial' created in:
D:\work\tutorial

You can start your first spider with:
cd tutorial
scrapy genspider example example.com

Following the manual:

D:\work\tutorial>scrapy genspider dmoz dmoz.org
C:\Python27\lib\site-packages\twisted\internet\_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
verifyHostname, VerificationError = _selectVerifyImplementation()
Created spider 'dmoz' using template 'basic' in module:
tutorial.spiders.dmoz
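
For reference, the 'basic' template in Scrapy 0.24 generates a skeleton in tutorial/spiders/dmoz.py roughly like the following (reconstructed from the 0.24 docs, so details may differ slightly):

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = ["http://www.dmoz.org/"]

    def parse(self, response):
        pass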

 

Running it raised an error:

D:\work\tutorial>scrapy crawl dmoz
C:\Python27\lib\site-packages\twisted\internet\_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
verifyHostname, VerificationError = _selectVerifyImplementation()
2015-01-06 17:01:25+0800 [scrapy] INFO: Scrapy 0.24.4 started (bot: tutorial)
2015-01-06 17:01:25+0800 [scrapy] INFO: Optional features available: ssl, http11
2015-01-06 17:01:25+0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2015-01-06 17:01:25+0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
  File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27\Scripts\scrapy.exe\__main__.py", line 9, in <module>
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "C:\Python27\lib\site-packages\scrapy\commands\crawl.py", line 60, in run
    self.crawler_process.start()
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 92, in start
    if self.start_crawling():
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 47, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 64, in __init__
    self.downloader = downloader_cls(crawler)
  File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 73, in __init__
    self.handlers = DownloadHandlers(crawler)
  File "C:\Python27\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 22, in __init__
    cls = load_object(clspath)
  File "C:\Python27\lib\site-packages\scrapy\utils\misc.py", line 42, in load_object
    raise ImportError("Error loading object '%s': %s" % (path, e))
ImportError: Error loading object 'scrapy.core.downloader.handlers.s3.S3DownloadHandler': No module named win32api

Addendum (2015-01-15):

For the "No module named win32api" error, I searched online for a solution:

 

When the "No module named win32api" exception appears, download the matching version of the pywin32 installer from here:

http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/

After downloading, install it, then run again:

D:\work\tutorial>scrapy crawl dmoz
C:\Python27\lib\site-packages\twisted\internet\_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.

……

2015-01-15 15:16:03+0800 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-01-15 15:16:03+0800 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-01-15 15:16:05+0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/> (referer: None)
2015-01-15 15:16:05+0800 [dmoz] INFO: Closing spider (finished)
2015-01-15 15:16:05+0800 [dmoz] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 211,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 6865,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 1, 15, 7, 16, 5, 501000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2015, 1, 15, 7, 16, 3, 898000)}
2015-01-15 15:16:05+0800 [dmoz] INFO: Spider closed (finished)

As for the warning "UserWarning: You do not have the service_identity module installed. Please install it from https://pypi.python.org/pypi/service_identity":

Searching online, the suggested fix was to install pyOpenSSL, but a check showed it was already installed. It turned out that what was actually missing was "service_identity":

D:\work\tutorial>pip install service_identity
Downloading/unpacking service-identity
Downloading service_identity-14.0.0-py2.py3-none-any.whl

……

Running setup.py install for pyasn1-modules

Successfully installed service-identity pyasn1-modules characteristic
Cleaning up…

The wheel service_identity-14.0.0-py2.py3-none-any.whl can also be downloaded separately:

https://pypi.python.org/pypi/service_identity#downloads

Run it again, and that warning is gone!

 

 

Trying out Selectors in the shell

D:\work\tutorial>scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"

 

In the shell, run:

>>> response.xpath('//title')
[<Selector xpath='//title' data=u'<title>DMOZ - Computers: Programming: La'>]
>>> response.xpath('//title').extract()
  File "<console>", line 1
    response.xpath('//title').extract()
    ^
IndentationError: unexpected indent
>>> response.xpath('//title').extract()
[u'<title>DMOZ - Computers: Programming: Languages: Python: Books</title>']
>>> response.xpath('//title/text()')
[<Selector xpath='//title/text()' data=u'DMOZ - Computers: Programming: Languages'>]
>>> response.xpath('//title/text()').re('(\w+):')
[u'Computers', u'Programming', u'Languages', u'Python']

We can select every <li> element in the page's list of sites with this code:

sel.xpath('//ul/li')

The site descriptions:

sel.xpath('//ul/li/text()').extract()

The site titles:

sel.xpath('//ul/li/a/text()').extract()

And the site links:

sel.xpath('//ul/li/a/@href').extract()
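
Putting those together in the shell loops over each site entry. A minimal sketch (Python 2.7, matching the environment above; response is provided by scrapy shell, and sel inside the loop is each <li> selector):

>>> for sel in response.xpath('//ul/li'):
...     title = sel.xpath('a/text()').extract()
...     link = sel.xpath('a/@href').extract()
...     desc = sel.xpath('text()').extract()
...     print title, link, desc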

 

Modify dmoz.py:
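
A sketch of the modified spider, following the 0.24 tutorial (the start_urls and the print-based parse come from the tutorial, not from this post, so treat them as assumptions):

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        # print each site's title, link and description
        for sel in response.xpath('//ul/li'):
            title = sel.xpath('a/text()').extract()
            link = sel.xpath('a/@href').extract()
            desc = sel.xpath('text()').extract()
            print title, link, desc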


 

Then it prints a lot of output!